Permutohedral Attention Module for Efficient Non-Local Neural Networks
Medical image processing tasks such as segmentation often require capturing
non-local information. As organs, bones, and tissues share common
characteristics such as intensity, shape, and texture, the contextual
information plays a critical role in correctly labeling them. Segmentation and
labeling are now typically done with convolutional neural networks (CNNs), but
the context available to a CNN is limited by its receptive field, which is itself
limited by memory requirements and other properties. In this paper, we propose
a new attention module, which we call the Permutohedral Attention Module (PAM), to
efficiently capture non-local characteristics of the image. The proposed method
is both memory- and computationally efficient. We provide a GPU implementation
of this module suitable for 3D medical imaging problems. We demonstrate the
efficiency and scalability of our module with the challenging task of vertebrae
segmentation and labeling where context plays a crucial role because of the
very similar appearance of different vertebrae.
Comment: Accepted at MICCAI 2019
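As a rough illustration of what the PAM approximates, here is a minimal sketch of a dense non-local (self-attention) block for 3D feature maps, assuming PyTorch. Its attention matrix is quadratic in the number of voxels; the PAM's contribution is to replace this dense step with approximate filtering on a permutohedral lattice, which this sketch does not implement. All names and sizes are illustrative.

```python
# Minimal sketch of dense non-local attention over a 3D feature map.
# The quadratic-cost attention below is what the PAM approximates efficiently.
import torch
import torch.nn as nn

class DenseNonLocal3d(nn.Module):
    def __init__(self, channels: int, inner: int = 16):
        super().__init__()
        self.theta = nn.Conv3d(channels, inner, kernel_size=1)  # queries
        self.phi = nn.Conv3d(channels, inner, kernel_size=1)    # keys
        self.g = nn.Conv3d(channels, inner, kernel_size=1)      # values
        self.out = nn.Conv3d(inner, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, d, h, w = x.shape
        n = d * h * w
        q = self.theta(x).reshape(b, -1, n)                  # (b, inner, n)
        k = self.phi(x).reshape(b, -1, n)
        v = self.g(x).reshape(b, -1, n)
        attn = torch.softmax(q.transpose(1, 2) @ k, dim=-1)  # (b, n, n): O(n^2)
        y = (v @ attn.transpose(1, 2)).reshape(b, -1, d, h, w)
        return x + self.out(y)                               # residual connection

if __name__ == "__main__":
    block = DenseNonLocal3d(8)
    vol = torch.randn(1, 8, 8, 8, 8)
    print(block(vol).shape)  # torch.Size([1, 8, 8, 8, 8])
```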
CrossMoDA 2021 challenge: Benchmark of Cross-Modality Domain Adaptation techniques for Vestibular Schwannoma and Cochlea Segmentation
Domain Adaptation (DA) has recently raised strong interest in the medical
imaging community. While a large variety of DA techniques have been proposed for
image segmentation, most of these techniques have been validated either on
private datasets or on small publicly available datasets. Moreover, these
datasets mostly addressed single-class problems. To tackle these limitations,
the Cross-Modality Domain Adaptation (crossMoDA) challenge was organised in
conjunction with the 24th International Conference on Medical Image Computing
and Computer Assisted Intervention (MICCAI 2021). CrossMoDA is the first large
and multi-class benchmark for unsupervised cross-modality DA. The challenge's
goal is to segment two key brain structures involved in the follow-up and
treatment planning of vestibular schwannoma (VS): the VS and the cochleas.
Currently, the diagnosis and surveillance in patients with VS are performed
using contrast-enhanced T1 (ceT1) MRI. However, there is growing interest in
using non-contrast sequences such as high-resolution T2 (hrT2) MRI. Therefore,
we created an unsupervised cross-modality segmentation benchmark. The training
set provides annotated ceT1 (N=105) and unpaired non-annotated hrT2 (N=105).
The aim was to automatically perform unilateral VS and bilateral cochlea
segmentation on hrT2 as provided in the testing set (N=137). A total of 16
teams submitted their algorithm for the evaluation phase. The level of
performance reached by the top-performing teams is strikingly high (best median
Dice: VS 88.4%, cochleas 85.7%) and close to full supervision (median Dice:
VS 92.5%, cochleas 87.7%). All top-performing methods made use of an
image-to-image translation approach to transform the source-domain images into
pseudo-target-domain images. A segmentation network was then trained using
these generated images and the manual annotations provided for the source
images.
Comment: Submitted to Medical Image Analysis
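A minimal sketch of the two-stage "translate, then segment" recipe described above, assuming PyTorch. The translator and segmenter below are trivial single-convolution stand-ins (the actual submissions used CycleGAN-style translators and nnU-Net-like segmenters); all names, shapes, and data are illustrative only.

```python
# Stage 1: an unpaired image-to-image model maps annotated ceT1 volumes into
# pseudo-hrT2. Stage 2: a segmenter is trained on pseudo-hrT2 + ceT1 labels.
import torch
import torch.nn as nn

translator = nn.Conv3d(1, 1, kernel_size=3, padding=1)  # stand-in generator
segmenter = nn.Conv3d(1, 3, kernel_size=3, padding=1)   # stand-in: bg/VS/cochlea
criterion = nn.CrossEntropyLoss()
optimiser = torch.optim.Adam(segmenter.parameters(), lr=1e-3)

cet1 = torch.randn(2, 1, 16, 16, 16)           # annotated source volumes (toy)
labels = torch.randint(0, 3, (2, 16, 16, 16))  # ceT1 manual annotations (toy)

# Stage 1 (assumed already trained): ceT1 -> pseudo-hrT2, no gradients needed.
with torch.no_grad():
    pseudo_hrt2 = translator(cet1)

# Stage 2: supervise the segmenter on pseudo-target images + source labels.
logits = segmenter(pseudo_hrt2)
loss = criterion(logits, labels)
loss.backward()
optimiser.step()
print(float(loss))
```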
Learn2Reg: comprehensive multi-task medical image registration challenge, dataset and evaluation in the era of deep learning
Image registration is a fundamental medical image analysis task, and a wide
variety of approaches have been proposed. However, only a few studies have
comprehensively compared medical image registration approaches on a wide range
of clinically relevant tasks. This limits the development of registration
methods, the adoption of research advances into practice, and fair benchmarking
across competing approaches. The Learn2Reg challenge addresses these
limitations by providing a multi-task medical image registration data set for
comprehensive characterisation of deformable registration algorithms. A
continuous evaluation will be possible at
https://learn2reg.grand-challenge.org. Learn2Reg covers a wide range of
anatomies (brain, abdomen, and thorax), modalities (ultrasound, CT, MR),
availability of annotations, as well as intra- and inter-patient registration
evaluation. We established an easily accessible framework for training and
validation of 3D registration methods, which enabled the compilation of results
of over 65 individual method submissions from more than 20 unique teams. We
used a complementary set of metrics, including robustness, accuracy,
plausibility, and runtime, enabling unique insight into the current
state-of-the-art of medical image registration. This paper describes datasets,
tasks, evaluation methods and results of the challenge, as well as results of
further analysis of transferability to new datasets, the importance of label
supervision, and resulting bias. While no single approach worked best across
all tasks, many methodological aspects could be identified that push medical
image registration to a new state of the art. Furthermore, we dispelled the
common belief that conventional registration methods have to be much slower
than deep-learning-based methods.
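As a hedged sketch of two of the complementary metric families mentioned above, the snippet below computes a label-overlap Dice score (accuracy) and the fraction of voxels with a non-positive Jacobian determinant, a common plausibility proxy for folding, assuming NumPy arrays. The function names and the finite-difference Jacobian are illustrative, not the challenge's exact evaluation code.

```python
import numpy as np

def dice(seg_a: np.ndarray, seg_b: np.ndarray, label: int) -> float:
    """Overlap between one label in two segmentations (accuracy metric)."""
    a, b = seg_a == label, seg_b == label
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def folding_fraction(disp: np.ndarray) -> float:
    """Share of voxels with det(J) <= 0 for phi(x) = x + u(x) (plausibility).

    disp: displacement field of shape (3, D, H, W), in voxel units.
    """
    # grads[i, j] = d u_i / d x_j via central differences; J = I + grad(u).
    grads = np.stack([np.stack(np.gradient(disp[i]), axis=0) for i in range(3)])
    jac = grads + np.eye(3).reshape(3, 3, 1, 1, 1)        # (3, 3, D, H, W)
    det = np.linalg.det(np.moveaxis(jac, (0, 1), (-2, -1)))
    return float((det <= 0).mean())

fixed = np.random.randint(0, 2, (8, 8, 8))
moving = np.random.randint(0, 2, (8, 8, 8))
disp = 0.1 * np.random.randn(3, 8, 8, 8)
print(dice(fixed, moving, 1), folding_fraction(disp))
```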
Inter Extreme Points Geodesics for End-to-End Weakly Supervised Image Segmentation
We introduce InExtremIS, a weakly supervised 3D approach to train
a deep image segmentation network using particularly weak train-time
annotations: only 6 extreme clicks at the boundary of the objects of interest.
Our fully-automatic method is trained end-to-end and does not require any
test-time annotations. From the extreme points, 3D bounding boxes are extracted
around objects of interest. Then, deep geodesics connecting extreme points are
generated to increase the amount of "annotated" voxels within the bounding
boxes. Finally, a weakly supervised regularised loss derived from a Conditional
Random Field formulation is used to encourage prediction consistency over
homogeneous regions. Extensive experiments are performed on a large open
dataset for Vestibular Schwannoma segmentation. InExtremIS obtained
competitive performance, approaching full supervision and significantly
outperforming other weakly supervised techniques based on bounding boxes.
Moreover, given a fixed annotation time budget, InExtremIS outperforms full
supervision. Our code and data are available online.
Comment: Early accepted at MICCAI 2021; code available at:
https://github.com/ReubenDo/InExtremIS
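A minimal sketch of the geodesic idea under simplifying assumptions: connect two extreme points by a minimum-cost path that is cheap inside intensity-homogeneous regions, and treat the traversed voxels as extra "annotated" foreground. It uses scikit-image's generic minimum-cost path on a hand-crafted intensity cost; the paper's deep geodesics are driven by learned features, which this sketch does not reproduce.

```python
# Connect a pair of extreme points by a minimum-cost path through a toy volume
# and turn the path into pseudo-annotations for weak supervision.
import numpy as np
from skimage.graph import route_through_array

rng = np.random.default_rng(0)
volume = rng.random((16, 16, 16))       # toy image
volume[4:12, 4:12, 4:12] = 0.5          # homogeneous "object"

# Cost map: cheap to move through voxels similar to the object intensity.
cost = np.abs(volume - 0.5) + 1e-3      # strictly positive costs

p0, p1 = (4, 8, 8), (11, 8, 8)          # two opposite extreme points
path, total_cost = route_through_array(cost, p0, p1, fully_connected=True)

scribble = np.zeros_like(volume, dtype=np.uint8)
for idx in path:
    scribble[tuple(idx)] = 1            # pseudo-annotated voxels along the path
print(len(path), round(total_cost, 3))
```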
A multi-organ point cloud registration algorithm for abdominal CT registration
Registering CT images of the abdomen is a crucial step for several tasks such
as disease progression tracking or surgical planning. It is also a challenging
step because of the heterogeneous content of the human abdomen, which implies
complex deformations. In this work, we focus on accurately registering a subset
of organs of interest. We register organ surface point clouds, as may typically
be extracted from an automatic segmentation pipeline, by expanding the Bayesian
Coherent Point Drift algorithm (BCPD). We introduce MO-BCPD, a multi-organ
version of the BCPD algorithm which explicitly models three important aspects
of this task: organ individual elastic properties, inter-organ motion coherence
and segmentation inaccuracy. This model also provides an interpolation
framework to estimate the deformation of the entire volume. We demonstrate the
efficiency of our method by registering different patients from the LITS
challenge dataset. The target registration error on anatomical landmarks is
almost half as large for MO-BCPD as for standard BCPD while imposing the
same constraints on individual organ deformations.
Comment: Accepted at WBIR 2022
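A hedged sketch, in the spirit of the interpolation framework mentioned above, of extending sparse surface-point displacements to arbitrary volume locations with Gaussian-kernel (Nadaraya-Watson) regression, assuming NumPy. MO-BCPD's actual formulation, with per-organ elasticity and motion-coherence terms, is richer than this; all values below are toy data.

```python
import numpy as np

def interpolate_displacement(src_pts, disp, query_pts, beta=10.0):
    """Nadaraya-Watson interpolation of point displacements.

    src_pts: (N, 3) surface points, disp: (N, 3) their displacements,
    query_pts: (M, 3) volume locations, beta: kernel width (assumed, in mm).
    """
    # Squared distances between every query point and every surface point.
    d2 = ((query_pts[:, None, :] - src_pts[None, :, :]) ** 2).sum(-1)  # (M, N)
    w = np.exp(-d2 / (2.0 * beta ** 2))
    w /= w.sum(axis=1, keepdims=True) + 1e-12   # normalise kernel weights
    return w @ disp                             # (M, 3) dense displacements

rng = np.random.default_rng(1)
surface = rng.uniform(0, 100, (200, 3))         # registered organ surface points
displacement = rng.normal(0, 2, (200, 3))       # point-wise output (toy values)
grid = rng.uniform(0, 100, (5, 3))              # arbitrary volume locations
print(interpolate_displacement(surface, displacement, grid))
```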